Random Variable
The idea of a random variable captures the notion of a variable whose values are random in nature, where the underlying measure space and the random variable itself determine the resulting distribution.
This allows one to assign some kind of "value" or "weighting" to events within the sample space.
A function \(X : \Omega \to E\) is called a random variable if it is a measurable function from the sample space \(\Omega\) of a probability space to a measurable space \(E\).
We write:
\[
P(X \in S) = P(\{\omega \in \Omega : X(\omega) \in S\})
\]
for the probability of the random variable assuming a value within a subset \(S\) of the codomain.
The function \(\mu(S) = P(X \in S)\) is a measure on the measurable space \(E\), turning it into a probability space in its own right. In the more general context of measure theory, \(\mu\) is called the pushforward measure of \(P\) under \(X\).
This conversion from one probability space to another can be thought of as a kind of "weighting" of the probabilities.
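On a finite sample space, this pushforward construction can be sketched directly: the measure of a set \(S\) in the codomain is the total probability of its preimage under \(X\). The following is a minimal sketch in Python; the particular space `P` and function `X` are hypothetical, chosen only for illustration.

```python
from fractions import Fraction

# Hypothetical finite probability space: three outcomes with given weights.
P = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}

# A random variable X : Omega -> {0, 1}, here given as a lookup table.
X = {"a": 0, "b": 1, "c": 1}

def pushforward(S):
    """mu(S) = P(X in S): sum the probabilities over the preimage X^{-1}(S)."""
    return sum(p for omega, p in P.items() if X[omega] in S)

print(pushforward({1}))     # outcomes "b" and "c" together: 1/4 + 1/4 = 1/2
print(pushforward({0, 1}))  # the whole codomain, so total mass 1
```

The "weighting" described above is visible here: the single value \(1\) in the codomain collects the combined probability of every outcome that \(X\) maps to it.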
Example
For example, consider a game in which one's score is determined by the sum of the results on two dice. Ultimately, the probability distribution we are interested in is that of the sum; however, constructing this probability space directly is much more difficult than looking at the underlying probability space of outcomes of the two dice.
As such, we construct a sample space \(\Omega = \{(1, 1), (1, 2), \dots, (6, 6)\}\), in which each outcome is equally likely with probability \(\frac{1}{36}\) (we effectively use the counting measure multiplied by this adjustment factor).
Now we construct our random variable to transform this underlying space into our desired set of outcomes:
\[
X((a, b)) = a + b
\]
That is, the sum of the values shown on the two dice.
Now, the probability of the sum attaining any specific value is the probability, in the underlying space, of the set of outcomes which \(X\) maps to that value.
For example, consider:
\[
P(X = 4) = P(\{(1, 3), (2, 2), (3, 1)\}) = \frac{3}{36} = \frac{1}{12}.
\]
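The whole distribution of the sum can be computed by enumerating the 36 outcomes and accumulating the pushforward measure. This is a sketch in Python; the names `omega`, `X`, and `mu` mirror the notation used above.

```python
from fractions import Fraction
from itertools import product
from collections import defaultdict

# Sample space Omega: all 36 equally likely outcomes of two dice.
omega = list(product(range(1, 7), repeat=2))
p = Fraction(1, 36)  # counting measure scaled so the total mass is 1

def X(outcome):
    """The random variable: the sum of the values on the two dice."""
    a, b = outcome
    return a + b

# Pushforward measure: mu[s] = P(X = s), summed over the preimage of s.
mu = defaultdict(Fraction)
for outcome in omega:
    mu[X(outcome)] += p

print(mu[4])  # 3/36 = 1/12, from the outcomes (1,3), (2,2), (3,1)
print(mu[7])  # 6/36 = 1/6, the most likely sum
```

Note that the resulting measure `mu` assigns unequal probabilities to the sums even though every underlying outcome is equally likely, which is exactly the "weighting" effect of the pushforward.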
A random variable is called discrete if its range is countable. A random variable whose range is uncountable, and for which every individual value is attained with probability zero, is called continuous.